20 research outputs found

    Development and Evaluation of an Ontology-Based Quality Metrics Extraction System

    The Institute of Medicine reports a growing demand in recent years for quality improvement within the healthcare industry. In response, numerous organizations have been involved in the development and reporting of quality measurement metrics. However, disparate data models from such organizations shift the burden of accurate and reliable metrics extraction and reporting to healthcare providers. Furthermore, manual abstraction of quality metrics and diverse implementations of Electronic Health Record (EHR) systems deepen the complexity of consistent, valid, explicit, and comparable quality measurement reporting within healthcare provider organizations. The main objective of this research is to evaluate an ontology-based information extraction framework that utilizes unstructured clinical text for defining and reporting quality of care metrics that are interpretable and comparable across different healthcare institutions. All transcribed clinical notes (48,835) from 2,085 patients who had undergone surgery in 2011 at MD Anderson Cancer Center were extracted from the EMR system and pre-processed to identify section headers. Subsequently, all notes were analyzed by MetaMap v2012 and one XML file was generated per note. The XML outputs were converted into Resource Description Framework (RDF) format. We also developed three ontologies: a section header ontology built from the extracted section headers using the RDF standard; a concept ontology comprising SNOMED entities representing five quality metrics (diabetes, hypertension, cardiac surgery, transient ischemic attack, and CNS tumor); and a clinical note ontology representing clinical note elements and their relationships. All ontologies (in Web Ontology Language format) and patient notes (in RDF) were imported into a triple store (AllegroGraph) as classes and instances, respectively. SPARQL queries were used for reporting extracted concepts under four settings: the base Natural Language Processing (NLP) output, inclusion of the concept ontology, exclusion of negated concepts, and inclusion of the section header ontology. Existing manual abstraction data from surgical clinical reviewers, on the same set of patients and documents, was used as the gold standard. Micro-averaged results of statistical agreement tests showed that precision, recall, and F-measure increased from 59%, 81%, and 68% on the base NLP output to 74%, 91%, and 82%, respectively, after incremental addition of the ontology layers. Our study introduces a framework that may contribute to advances in “complementary” components for existing information extraction systems. The application of an ontology-based approach to natural language processing in our study has provided mechanisms for increasing the performance of such tools. The pivot point for extracting more meaningful quality metrics from clinical narratives is the abstraction of contextual semantics hidden in the notes. We have defined some of these semantics and quantified them in multiple complementary layers in order to demonstrate the importance and applicability of an ontology-based approach to quality metric extraction. The application of such ontology layers introduces powerful new ways of querying context-dependent entities from clinical text. Rigorous evaluation is still necessary to ensure the quality of these “complementary” NLP systems. Moreover, research is needed to create and update evaluation guidelines and criteria for assessing the performance and efficiency of ontology-based information extraction in healthcare, and to provide a consistent baseline for comparing alternative approaches.
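
    The querying step described above can be illustrated with a short, hypothetical sketch: concept mentions stored as RDF triples are retrieved with a SPARQL query that keeps only non-negated mentions of the target quality-metric concepts and filters by section. The file names, namespaces, classes, and properties used here (ex:ClinicalNote, ex:mentionsConcept, ex:isNegated, ex:inSection, ex:FamilyHistorySection) are illustrative assumptions, not the study's actual schema.

```python
# Minimal sketch, assuming a hypothetical RDF schema (not the study's own),
# of SPARQL retrieval over notes loaded into an rdflib graph.
from rdflib import Graph

g = Graph()
g.parse("patient_notes.rdf", format="xml")           # RDF converted from MetaMap XML
g.parse("concept_ontology.owl", format="xml")        # quality-metric concepts
g.parse("section_header_ontology.owl", format="xml") # section header classes

query = """
PREFIX ex: <http://example.org/clinical#>
SELECT ?note ?concept
WHERE {
  ?note a ex:ClinicalNote ;
        ex:mentionsConcept ?mention .
  ?mention ex:concept ?concept ;
           ex:isNegated false ;            # exclude negated mentions
           ex:inSection ?section .
  ?concept a ex:QualityMetricConcept .     # restrict to the metric concepts
  FILTER NOT EXISTS { ?section a ex:FamilyHistorySection }
}
"""
for note, concept in g.query(query):
    print(note, concept)
```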

    Parallel and Distributed Execution of Model Management Programs

    The engineering process of complex systems involves many stakeholders and development artefacts. Model-Driven Engineering (MDE) is an approach to development which aims to help curtail and better manage this complexity by raising the level of abstraction. In MDE, models are first-class artefacts in the development process. Such models can be used to describe artefacts of arbitrary complexity at various levels of abstraction according to the requirements of their prospective stakeholders. These models come in various sizes and formats and can be thought of more broadly as structured data. Since models are the primary artefacts in MDE, and the goal is to enhance the efficiency of the development process, powerful tools are required to work with such models at an appropriate level of abstraction. Model management tasks – such as querying, validation, comparison, transformation and text generation – are often performed using dedicated languages, with declarative constructs used to improve expressiveness. Despite their semantically constrained nature, the execution engines of these languages rarely capitalize on the optimization opportunities afforded to them. Therefore, working with very large models often leads to poor performance when using MDE tools compared to general-purpose programming languages, which has a detrimental effect on productivity. Given the stagnant single-threaded performance of modern CPUs along with the ubiquity of distributed computing, parallelization of these model management programs is a necessity to address some of the scalability concerns surrounding MDE. This thesis demonstrates efficient parallel and distributed execution algorithms for model validation, querying and text generation, and evaluates their effectiveness. By fully utilizing the CPUs on 26 hexa-core systems, we were able to improve the performance of a complex model validation language by 122x compared to its existing sequential implementation. Up to 11x speedup was achieved with 16 cores for model query and model-to-text transformation tasks.
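
    As an illustration of the data-parallel idea underlying this work (and not the thesis's actual Epsilon-based implementation), the sketch below distributes constraint checks over a model's elements across local CPU cores; the constraints, the element structure, and the numbers are made-up assumptions.

```python
# Minimal sketch of data-parallel model validation: each worker checks a slice
# of the model's elements against the same hypothetical constraints.
from multiprocessing import Pool

def check_name_not_empty(element):
    return bool(element.get("name"))

def check_positive_id(element):
    return element.get("id", 0) > 0

CONSTRAINTS = [check_name_not_empty, check_positive_id]

def validate(element):
    # Return (element id, names of violated constraints) for one element.
    violated = [c.__name__ for c in CONSTRAINTS if not c(element)]
    return element.get("id"), violated

if __name__ == "__main__":
    # Made-up model: every seventh element has an empty name.
    model = [{"id": i, "name": f"node{i}" if i % 7 else ""} for i in range(1, 10_001)]
    with Pool() as pool:                       # one worker per available core
        results = pool.map(validate, model, chunksize=500)
    problems = [(eid, v) for eid, v in results if v]
    print(f"{len(problems)} elements violate at least one constraint")
```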

    Towards optimisation of model queries : A parallel execution approach

    The growing size of software models poses significant scalability challenges. Amongst these challenges is the execution time of queries and transformations. In many cases, model management programs are (or can be) expressed as chains and combinations of core fundamental operations. Most of these operations are pure functions, making them amenable to parallelisation, lazy evaluation and short-circuiting. In this paper we show how all three of these optimisations can be combined in the context of Epsilon: an OCL-inspired family of model management languages. We compare our solutions with both interpreted and compiled OCL as well as hand-written Java code. Our experiments show a significant improvement in the performance of queries, especially on large models.
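
    The following sketch shows, in plain Python rather than Epsilon or OCL, how the three optimisations mentioned above can be combined: a generator pipeline gives lazy evaluation, any() short-circuits at the first match, and chunks of the model are scanned in parallel. The element structure and the is_orphan predicate are illustrative assumptions.

```python
# Illustrative combination of laziness, short-circuiting and parallelism
# for an existence-style model query over made-up elements.
from concurrent.futures import ProcessPoolExecutor

def is_orphan(element):
    # Hypothetical predicate: an element with no container and no children.
    return element["container"] is None and not element["children"]

def chunk_has_orphan(chunk):
    # Lazy + short-circuiting: stops at the first match within the chunk.
    return any(is_orphan(e) for e in chunk)

def model_has_orphan(model, workers=4):
    chunks = [model[i::workers] for i in range(workers)]   # round-robin split
    with ProcessPoolExecutor(max_workers=workers) as pool:
        # Stop consuming results as soon as a chunk reports a match.
        for found in pool.map(chunk_has_orphan, chunks):
            if found:
                return True
    return False

if __name__ == "__main__":
    model = [{"container": 1, "children": [2]} for _ in range(100_000)]
    model.append({"container": None, "children": []})       # one orphan
    print(model_has_orphan(model))
```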

    Cone-Beam Computed Tomography for Evaluation of Apical Transportation in Root Canals Prepared by Two Rotary Systems

    Introduction: Due to the importance of apical transportation during root canal preparation, the aim of the current study was to use cone-beam computed tomography (CBCT) to assess the extent of apical transportation caused by ProTaper and Mtwo files. Methods and Materials: Forty extracted maxillary first molars with lengths of 19-22 mm and curvatures of 20-40 degrees were selected. The mesiobuccal canals were prepared using either Mtwo or ProTaper rotary files (n=20). CBCT images were obtained before and after canal preparation to compare the apical transportation in different cross-sections of the mesial and distal surfaces. The apical transportation values were analyzed using the SPSS software. The results were compared with Student’s t-test and the Mann-Whitney U test. Results: There was no significant difference in the extent of apical transportation between the Mtwo and ProTaper systems in different canal cross-sections. The apical transportation value was less than 0.1 mm in most of the specimens, which is clinically acceptable. Conclusion: Considering the insignificant difference between the two systems, it can be concluded that both systems have low rates of apical transportation and can be used with confidence in clinical settings.
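
    For readers unfamiliar with the comparison reported above, the sketch below runs the same kind of two-group analysis (an independent-samples t-test and a Mann-Whitney U test) on made-up apical transportation values in millimetres; the numbers are not the study's data.

```python
# Illustrative only: two-group comparison on made-up transportation values (mm).
from scipy import stats

mtwo     = [0.05, 0.08, 0.06, 0.09, 0.04, 0.07, 0.10, 0.06, 0.05, 0.08]
protaper = [0.06, 0.09, 0.07, 0.08, 0.05, 0.10, 0.09, 0.07, 0.06, 0.08]

t_stat, t_p = stats.ttest_ind(mtwo, protaper)                        # Student's t-test
u_stat, u_p = stats.mannwhitneyu(mtwo, protaper, alternative="two-sided")

print(f"t-test:       t = {t_stat:.2f}, p = {t_p:.3f}")
print(f"Mann-Whitney: U = {u_stat:.1f}, p = {u_p:.3f}")
```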

    Re-implementing Apache Thrift using model-driven engineering technologies : An experience report

    In this paper we investigate how contemporary model-driven engineering technologies such as Xtext, EMF and Epsilon compare against mainstream techniques and tools (C++, flex and Bison) for the development of a complex textual modelling language and a family of supporting code generators (Apache Thrift). Our preliminary results indicate that the MDE-based implementation delivers significant benefits in terms of conciseness, coupling and cohesion.

    Evaluation of the Prevalence of Complete Isthmii in Permanent Teeth Using Cone-Beam Computed Tomography

    Introduction: The current study aimed at determining the prevalence of complete isthmii in permanent teeth, using cone-beam computed tomography (CBCT) in a selected Iranian community. Methods and Materials: In this cross-sectional study, 100 CBCT images (from 58 female and 42 male patients) including 1654 teeth (809 maxillary and 845 mandibular teeth) were evaluated. Each tooth root was evaluated in the axial plane (interval, 0.1 mm; thickness, 0.1 mm) from the orifice to the apex and from the apex to the orifice to detect the presence of a complete isthmus. Scans of teeth with complete isthmii were re-evaluated in the axial, sagittal, and coronal planes at a thickness of 0.1 mm. The presence or absence of complete isthmii in each tooth was reported. The root canal was divided into 3 equal parts (cervical, middle and apical thirds), and isthmii were classified into 6 categories with respect to their start and end points: 1) beginning and ending in the cervical third; 2) beginning in the cervical third and ending in the middle third; 3) beginning in the cervical third and ending in the apical third; 4) beginning and ending in the middle third; 5) beginning in the middle third and ending in the apical third; and 6) beginning and ending in the apical third. Results: The prevalence of complete isthmii in permanent teeth was 8.6%, and the highest prevalence was found in the mesial roots of the mandibular first molars. In the maxilla, the highest prevalence of complete isthmii was found in the mesiobuccal roots of the maxillary first molars, whereas in canines and central incisors no isthmii were detected. In the mandible, the lowest prevalence was found in second premolars. In maxillary molars, isthmii beginning and ending in the middle third of the root had the highest prevalence, whereas in mandibular molars, isthmii beginning in the middle or apical third and ending in the apical third had the highest prevalence. Conclusion: As the prevalence of complete isthmii was highest in molars, endodontists should pay particular attention to them to accomplish successful surgical or nonsurgical root canal therapy. Keywords: Cone-Beam Computed Tomography; Root Canal Anatomy; Root Canal Isthmus

    Distributed model validation with Epsilon

    Scalable performance is a major challenge with current model management tools. As the size and complexity of models and model management programs increase and the cost of computing falls, one solution for improving the performance of model management programs is to perform computations on multiple computers. In this paper, we demonstrate a low-overhead data-parallel approach for distributed model validation in the context of an OCL-like language. Our approach minimises communication costs by exploiting the deterministic structure of programs and can take advantage of multiple cores on each (heterogeneous) machine with highly configurable computational granularity. Our performance evaluation shows that the implementation has extremely low overhead, achieving a speedup of 24.5× with 26 computers over the sequential case, and 122× when utilising all six cores on each computer.
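
    The communication-minimising idea described above can be sketched roughly as follows: because every node can enumerate the validation jobs deterministically in the same order, each node derives its own share from its rank rather than receiving an explicit job list. The job enumeration, the stand-in constraint check, and the NODE_RANK/WORLD_SIZE environment variables below are illustrative assumptions, not details of the Epsilon implementation.

```python
# Hedged sketch of deterministic job partitioning for distributed validation.
import os
from multiprocessing import Pool

def jobs(model_size, num_constraints):
    # Same deterministic enumeration on every node: (element index, constraint index).
    for e in range(model_size):
        for c in range(num_constraints):
            yield (e, c)

def run_job(job):
    e, c = job
    return (e, c, (e * 31 + c) % 97 != 0)    # stand-in for a real constraint check

def my_share(rank, world_size, model_size, num_constraints):
    # Round-robin ownership by rank: no job list is ever transmitted between nodes.
    return [j for i, j in enumerate(jobs(model_size, num_constraints))
            if i % world_size == rank]

if __name__ == "__main__":
    rank = int(os.environ.get("NODE_RANK", "0"))        # assumed environment variables
    world_size = int(os.environ.get("WORLD_SIZE", "1"))
    share = my_share(rank, world_size, model_size=10_000, num_constraints=5)
    with Pool() as pool:                                 # also use all local cores
        results = pool.map(run_job, share)
    failures = [(e, c) for e, c, ok in results if not ok]
    print(f"node {rank}: checked {len(share)} jobs, {len(failures)} failures")
```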

    What is drought? The scientific understanding of drought: the primary step towards resolving Iran's water crisis

    In this article we discuss four basic approaches to characterising droughts, namely meteorological, hydrological, agricultural, and socioeconomic. In the first three approaches, drought is defined and measured as a physical phenomenon primarily related to a precipitation shortfall. Socioeconomic drought, in contrast, is defined in terms of the balance between water 'supply' and 'demand' in different socioeconomic systems. The specific case of Iran’s drought and water crisis is the main focus of this article, and is briefly compared to California’s ongoing drought. In cases such as Iran, socioeconomic drought is a result of inefficient and unsustainable management of water resources. Therefore, we cannot simply associate droughts with climate variability and/or change. Furthermore, due to large uncertainties in climate modelling and water management scenarios, long-term prediction of drought is impossible.

    Nash-reinforcement learning (N-RL) for developing coordination strategies in non-transferable utility games

    Social (central) planning is normally used in the literature to optimize the system-wide efficiency and utility of multi-operator systems. Central planning tries to maximize the system's benefits by coordinating the operators' strategies and reducing externalities, assuming that all parties are willing to cooperate. This assumption implies that operators are willing to base their decisions on group rationality rather than individual rationality, even if increased group benefits result in reduced benefits for some agents. This assumption limits the applicability of the social planner's solutions, as perfect cooperation among agents is often infeasible in the real world. Recognizing the fact that decisions are normally based on individual rationality in human systems, cooperative game theory methods are normally employed to address the major limitation of the social planner's methods. Game theory methods revise the social planner's solution such that not only are group benefits increased, but also no agent's cooperative gain is less than its non-cooperative gain. However, in most cases, utility is assumed to be transferable, and the literature has not sufficiently focused on non-transferable utility games. In such games, parties are willing to cooperate and coordinate their strategies to increase their benefits, but have no ability to compensate each other to promote cooperation. To a good extent, the transferable utility assumption is due to the complexity of the calculations needed to find the best-response strategies of agents in non-cooperative and cooperative modes, especially in multi-period games. By combining Reinforcement Learning and the Nash bargaining solution, this paper develops a new method for applying cooperative game theory to complex multi-period non-transferable utility games. For illustration, the suggested method is applied to two numerical examples in which two hydropower operators seek to develop a fair and efficient cooperation mechanism to increase their gains.
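
    As a rough, hedged illustration of the bargaining step (not the paper's N-RL algorithm or its hydropower model), the sketch below picks a joint coordination strategy theta that maximises the product of the two operators' gains over their non-cooperative (disagreement) payoffs, with no side payments allowed; the benefit functions and numbers are made-up assumptions.

```python
# Simplified Nash bargaining over a joint strategy in a non-transferable
# utility setting: payoffs depend only on the shared parameter theta.
import numpy as np

def benefit_1(theta):
    return 6.0 * np.sqrt(theta)          # hypothetical gain of operator 1

def benefit_2(theta):
    return 8.0 * (1.0 - theta ** 2)      # hypothetical gain of operator 2

def nash_bargaining(d1, d2, grid=10_001):
    theta = np.linspace(0.0, 1.0, grid)
    g1 = benefit_1(theta) - d1           # gains over disagreement payoffs
    g2 = benefit_2(theta) - d2
    # Only strategies that improve on both disagreement points are admissible.
    product = np.where((g1 > 0) & (g2 > 0), g1 * g2, -np.inf)
    best = product.argmax()
    return theta[best], benefit_1(theta[best]), benefit_2(theta[best])

if __name__ == "__main__":
    d1, d2 = 2.0, 3.0                    # e.g., values each operator secures alone
    theta, u1, u2 = nash_bargaining(d1, d2)
    print(f"coordination theta = {theta:.3f}, payoffs = ({u1:.2f}, {u2:.2f})")
```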